No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms
Nonlinear embedding manifold learning methods provide invaluable visual insights into the structure of high-dimensional data. However, due to a complicated nonconvex objective function, these methods can easily get stuck in local minima, and their embedding quality can be poor. We propose a natural extension to several manifold learning methods aimed at identifying pressured points, i.e. points that are stuck in poor local minima and have poor embedding quality. We show that the objective function can be decreased by temporarily allowing these points to make use of an extra dimension in the embedding space. Our method is able to improve the objective function value of existing methods even after they get stuck in a poor local minimum.
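The core idea — that a point trapped by its neighbors in a low-dimensional embedding can escape, and lower the objective, if it is temporarily given one extra coordinate — can be illustrated with a toy numpy sketch. This is not the paper's actual algorithm (which is defined for SNE/t-SNE-style objectives and includes a criterion for detecting pressured points); it uses a simple MDS-style stress objective and a hand-picked "stuck" point purely for illustration.

```python
import numpy as np

def stress(Y, D):
    """MDS-style stress: squared mismatch between embedding and target distances."""
    s = 0.0
    for i in range(len(Y)):
        for j in range(i + 1, len(Y)):
            s += (np.linalg.norm(Y[i] - Y[j]) - D[i, j]) ** 2
    return s

# Target distances: four points at the corners of a unit square.
r2 = np.sqrt(2.0)
D = np.array([[0, 1, r2, 1],
              [1, 0, 1, r2],
              [r2, 1, 0, 1],
              [1, r2, 1, 0]], dtype=float)

# A 1-D embedding of the kind a gradient method can converge to: the square
# cannot be flattened onto a line, and point 3 sits on top of point 1.
Y1 = np.array([[0.0], [1.0], [2.0], [1.0]])
base = stress(Y1, D)

# Treat point 3 as the "pressured point": lift it into a temporary second
# dimension and search over that coordinate only, keeping all else fixed.
Y2 = np.hstack([Y1, np.zeros((4, 1))])
best = base
for z in np.linspace(-2.0, 2.0, 401):
    Y2[3, 1] = z
    best = min(best, stress(Y2, D))

assert best < base  # the extra dimension strictly lowers the objective
```

In the paper's method the extra dimension is only temporary: once the pressured points have moved past the configuration that blocked them, the auxiliary coordinate is driven back to zero so the final embedding stays low-dimensional.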
Reviews: No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms
I have read the authors' response and the other reviewers' comments, and I choose to raise my score. The way these points are utilized is also inspirational. Cons: - Like many other works in nonlinear dimensionality reduction, this paper does not provide a systematic, convincing way to evaluate and compare the new approach against the previous methods that are claimed to be less effective. The visual comparison in Figure 6 appears cherry-picked.
Dimensionality reduction methods such as t-SNE are widely used to visualize and interpret (and often over-interpret) high-dimensional data. Such visualization has become a staple in the field, and it has been a while since I have seen substantial progress in improving these visualization techniques; this paper is such a case. Reviewer 1 summarizes the contribution and its importance better than I could word it myself: This work has two main contributions, which are sufficiently significant given the interest in visualization and dimensionality reduction via SNE, t-SNE, and further extensions: 1. Identification of pressured points that are "stuck" in a suboptimal location in the embedding due to local minima caused by dimensionality constraints. The manuscript is well written, well motivated, and convincingly establishes the reasoning behind the proposed approach as well as its effectiveness. All three reviewers agree on accepting the paper. All reviewers agreed that the paper provides new insights, a novel approach, and a valuable practical contribution that is extensively validated on multiple datasets and is well written.
Papers published at the Neural Information Processing Systems Conference.